Ninth Workshop on Membrane Computing (WMC 9), Edinburgh

Authors

  • Pierluigi Frisco
  • David W. Corne
  • Gheorghe Păun
  • Oscar H. Ibarra
Abstract

computing devices (the characterization was then improved and more extensively explained by W. Sieg in [34], [35]) and by N. De Pisapia in [11] to model abstract neural nets. The cortex models presented in the papers quoted above are described in the language of hereditarily finite sets in Sections 2 and 3 of the present paper, where the links between the mentioned cortex aspects are also discussed.

The paper is intended, among other things, as a hint and a brief survey of some methods of cortex modeling and related topics for those who will approach problem B in [29], on applications of spiking neural P systems (introduced in [22]) in neurology, in particular in modeling cortex behaviour and computations inspired by this behaviour.

The basic concepts concerning hereditarily finite sets are introduced and explained in the following definitions and comments.

Definition 1 We recall the notion of a hereditarily finite set used in [17]. For a potentially infinite set L of labels or names which are urelements, i.e., they are not (treated as) sets themselves, we define inductively a family of sets HF_i for natural numbers i ≥ 0 such that

HF_0 = ∅,
HF_{i+1} = the set of non-empty finite subsets of L ∪ HF_i.

The elements of the union HF = ⋃{HF_i | i ≥ 0} ∪ {∅} are called hereditarily finite sets over L, or hereditarily finite sets with urelements in L, or simply hereditarily finite sets if there is no risk of confusion. For x ∈ HF we define its weak transitive closure WTC(x) and support supp(x) by

WTC(x) = ⋃{WTC(y) | y ∈ x and y ∈ HF} ∪ {x},
supp(x) = (x ∩ L) ∪ ⋃{supp(y) | y ∈ x and y ∈ HF},

and the depth of x is defined to be the smallest natural number i for which x ∈ HF_i. We write depth(x) to denote the depth of a hereditarily finite set x.

Explanatory Comments 1 One interprets a hereditarily finite set x of depth greater than 1 and the corresponding sets WTC(x) and supp(x) in the following way.
The urelements belonging to supp(x) are elementary or indecomposable pieces of x; the elements of WTC(x) − {x} are composed pieces of x. The set x itself is assembled, or composed, or aggregated successively from these two kinds of pieces, elementary and composed ones, according to the membership relation ∈ restricted to WTC(x) ∪ supp(x). Any (pictorial or verbal) presentation of this restriction of ∈ may serve as a plan or an algorithm of an assembly of x. If x is a model of a system, e.g., an organ in biology, the elements of supp(x) correspond to indecomposable pieces of this system and the elements of WTC(x) − {x} correspond to composed pieces of this system.

Explanatory Comments 2 For a hereditarily finite set x the collection {supp(y) | y ∈ WTC(x)} can represent the spatial aspect of x as a collection of sets of regions of a space (on a topological level of abstraction) in which the pieces of x are placed. Hence the assignment y ↦ supp(y) (y ∈ WTC(x)) links the assembly aspect of x, represented by WTC(x), with the spatial aspect of x, represented on a topological level of abstraction by {supp(y) | y ∈ WTC(x)}.

2 Columnar and areal organization of cortex

In this section we discuss anatomical, spatial, and functional organization aspects of cortex with regard to a conceptual approach to the formation of cortex models. We show how cortex can be modelled in terms of hereditarily finite sets so as to link these aspects.

We begin the discussion with the anatomical aspect of cortex. The columnar and areal organization of cortex described in [27], [39], discussed also in [8], [10], [23], [24], and the interpretation of hereditarily finite sets proposed in Explanatory Comments 1 give rise to the following main example of hereditarily finite sets.
Main example 1 The anatomical assembly of cortex with respect to its columnar and areal organization can be modelled by a hereditarily finite set xcortex of depth 4 such that

• xcortex itself is the set of all areas of cortex,
• areas of cortex are non-empty sets of hypercolumns of cortex,
• hypercolumns are non-empty sets of columns of cortex,
• columns are non-empty sets of neurons of cortex, where neurons are treated as urelements.

We will discuss hypothetical properties of xcortex by using the following auxiliary notions.

Definition 2 A collection of sets forms a tree if and only if, for any two sets that belong to the collection, either one is wholly contained in the other, or else they are wholly disjoint. See [3].

Remark 1 A lattice-theoretical treatment of the above-defined trees relates them in [25] to ultrametric spaces, where the relationship is described by an isomorphism of appropriate categories, mentioned directly in the title of [25]. For a survey of applications of ultrametric spaces in biology and physics see e.g. [31].

Definition 3 For two elements x, y of a collection A of sets we say that x is an immediate subset of y with respect to A if x is a proper subset of y and there does not exist a set z in A such that x is a proper subset of z and z is a proper subset of y.

Definition 4 A hereditarily finite set x is a tree-like hereditarily finite set if the assignment y ↦ supp(y) (y ∈ WTC(x)) is an injection and {supp(y) | y ∈ WTC(x)} is a tree.

Definition 5 We say that a hereditarily finite set x is a regular hereditarily finite set if the assignment y ↦ supp(y) (y ∈ WTC(x)) is an injection and z ∈ y implies that supp(z) is an immediate subset of supp(y) with respect to {supp(y) | y ∈ WTC(x)} for all y, z ∈ WTC(x).
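The auxiliary notions above are all mechanically checkable. The following minimal sketch is an assumption of ours, not part of the paper: hereditarily finite sets are encoded as Python frozensets, urelements as any non-set values, and all function names are our own. It implements supp and WTC from Definition 1 together with the tree and immediate-subset tests of Definitions 2 and 3:

```python
def supp(x):
    """supp(x): the urelements reachable from x (Definition 1)."""
    out = set()
    for y in x:
        out |= supp(y) if isinstance(y, frozenset) else {y}
    return frozenset(out)

def wtc(x):
    """WTC(x): x together with every set reachable from x via membership."""
    out = {x}
    for y in x:
        if isinstance(y, frozenset):
            out |= wtc(y)
    return out

def is_tree(collection):
    """Definition 2: any two members are nested or wholly disjoint."""
    return all(a <= b or b <= a or not (a & b)
               for a in collection for b in collection)

def is_immediate_subset(x, y, collection):
    """Definition 3: x is a proper subset of y with no member strictly between."""
    return x < y and not any(x < z < y for z in collection)

def is_tree_like(x):
    """Definition 4: y -> supp(y) is injective on WTC(x) and the supports form a tree."""
    supports = [supp(y) for y in wtc(x)]
    return len(set(supports)) == len(supports) and is_tree(supports)
```

For the cortex model xcortex of Main example 1, is_tree_like would test whether the regions occupied by columns, hypercolumns, and areas are pairwise nested or disjoint.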
Definition 6 We define depth-homogeneous hereditarily finite sets by induction in the following way:

• a depth-homogeneous hereditarily finite set of depth 1 is a finite non-empty set of urelements,
• a depth-homogeneous hereditarily finite set of depth n + 1 is a finite non-empty set of depth-homogeneous hereditarily finite sets of depth n.

Artificial example 1 We present an example of a depth-homogeneous hereditarily finite set of depth 3: the set

x = {{{1, 5}, {1, 4}, {1, 3}}, {{2, 4}, {1, 3}}, {{2, 4}, {2, 5}}}

is a regular hereditarily finite set, but the collection {supp(y) | y ∈ WTC(x)} is not a tree and hence x is not a tree-like hereditarily finite set.

Theorem 2.1 Every tree-like hereditarily finite set is a regular hereditarily finite set, but the converse is not true, i.e., there exist regular hereditarily finite sets which are not tree-like hereditarily finite sets.

Proof One proves the theorem by induction on the depth of hereditarily finite sets. The set x in Artificial example 1 is a regular hereditarily finite set which is not a tree-like hereditarily finite set. □

Comment 1 A collection of sets which is a tree is a form of idealized spatial organization of a system (on a topological level of abstraction), serving more to simplify system analysis or recurrent system construction than arising as the result of an evolutive natural process of spatial adaptation of a system. This observation, due to Ch. Alexander, is contained in his paper [3] concerning applications of tree structures in city planning. Thus, since by Theorem 2.1 the notion of a regular hereditarily finite set is less restrictive than the notion of a tree-like hereditarily finite set, we propose the following hypothesis.

Hypothesis 1 The hereditarily finite set xcortex modelling cortex is a depth-homogeneous, regular hereditarily finite set.
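The separating set of Artificial example 1 can be checked mechanically. The sketch below (the frozenset encoding and helper names are our assumptions, not the paper's) recomputes the supports over WTC(x) and confirms that they violate the tree condition of Definition 2, so x is not tree-like:

```python
def supp(x):
    """Urelements reachable from x."""
    out = set()
    for y in x:
        out |= supp(y) if isinstance(y, frozenset) else {y}
    return frozenset(out)

def wtc(x):
    """x together with every set reachable from x via membership."""
    out = {x}
    for y in x:
        if isinstance(y, frozenset):
            out |= wtc(y)
    return out

F = frozenset
x = F({F({F({1, 5}), F({1, 4}), F({1, 3})}),
       F({F({2, 4}), F({1, 3})}),
       F({F({2, 4}), F({2, 5})})})

supports = {supp(y) for y in wtc(x)}
# The supports {1, 3, 4, 5} and {1, 2, 3, 4} of the first two depth-2 elements
# overlap without either containing the other, so the supports are not a tree.
overlap_without_nesting = any(a & b and not (a <= b or b <= a)
                              for a in supports for b in supports)
```

This exhibits the witness used in the proof of Theorem 2.1: overlapping, non-nested supports are exactly what the tree condition forbids.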
The assignment y ↦ supp(y) (y ∈ WTC(x)) establishes the links between the anatomical assembly aspect and the spatial aspect of cortex for x = xcortex, see Explanatory Comments 2.

The functional organization aspect of cortex is modelled in [15], [39], [20], [21] by using, more or less explicitly, a graph of pathway connections between cortex areas. The vertices of that graph of pathway connections are areas (or their labels, or names); the edges are those ordered pairs of areas which are determined, among others, by functions μ_{y,y′} : supp(y) × supp(y′) → R ((y, y′) ∈ xcortex × xcortex) valued in the set R of real numbers, where the values μ_{y,y′}(n, n′) describe the strength of the synaptic connection between neurons n ∈ supp(y) and n′ ∈ supp(y′). The graph of pathway connections determined by the functions μ_{y,y′} ((y, y′) ∈ xcortex × xcortex) describing the strength of synaptic connections links the functional organization aspect of cortex with its anatomical and spatial aspects.

3 Multilevel (nested) neural networks and networks of networks

In this section we discuss certain computational models of cortex which are based on the concept of a system of nested neural networks described by J. P. Sutton et al. in [37] and on the idea of a network of networks due to J. A. Anderson and J. P. Sutton presented in [26], [5], [6].

We describe the concept of a system of nested neural networks and the idea of a network of networks in terms of hereditarily finite sets by using the following notion.
Definition 7 For a natural number n > 1 an n-level network N is given by

• a depth-homogeneous tree-like hereditarily finite set x of depth n, called the underlying hereditarily finite set of N,
• a family of state interaction functions μ^y_{z,z′} : supp(z) × supp(z′) → R ((z, z′) ∈ (y × y) − ∆̇_y and y ∈ WTC(x) with depth(y) > 1) valued in the set R of real numbers (these functions correspond to the state interaction matrices in [5]), where ∆̇_y = {(z, z) | z ∈ y} for depth(y) > 2 and ∆̇_y = ∅ for depth(y) = 2,
• two functions σ, θ : supp(x) → R, which are the state function and the threshold function of N, respectively.

The underlying hereditarily finite set x of an n-level network N describes the hierarchical organization of N, where the elements of WTC(x) correspond to clusters, see [37], and the indecomposable units of N are the urelements belonging to supp(x), e.g., neural elements themselves or elementary processors. The state interaction functions μ^y_{z,z′} describe the strength of synaptic connections between neurons, like the functions μ_{y,y′} in Section 2.

Corollary 1 Let N be an n-level network. Then

• for n = 2 the network N is a network of networks understood as in [26], [5],
• for n > 2 and y ∈ x one obtains an (n − 1)-level network N[y] determined by y such that y is the underlying hereditarily finite set of N[y], the family of state interaction functions of N[y] is the restriction of the family of state interaction functions of N to the case of WTC(y), and the state function and the threshold function of N[y] are the restrictions of the state function and the threshold function of N, respectively, to the set supp(y).

Thus for n > 2 an n-level network N is a network of (n − 1)-level networks N[y] (y ∈ x) of (n − 2)-level networks N[y][z] (z ∈ y), etc., where N[y] is immediately nested in N and N[y][z] is immediately nested in N[y].

Proof The corollary is an immediate consequence of the definition of an n-level network. □

Theorem 3.1 For a natural number n > 1, let N be an n-level network.
Then there exists a unique function μ : supp(x) × supp(x) → R such that μ ↾ supp(z) × supp(z′) = μ^y_{z,z′} for all (z, z′) ∈ (y × y) − ∆̇_y, y ∈ WTC(x), where μ ↾ supp(z) × supp(z′) denotes the restriction of μ to the set supp(z) × supp(z′).

Proof We prove the theorem by induction on n. □

Corollary 2 For a natural number n > 1 an n-level network N behaves globally as a usual neural network whose state interaction function is the μ given in Theorem 3.1.

Proof The corollary is an immediate consequence of Theorem 3.1. □

Remark 2 Assume that we are given a neural network whose hierarchical organization is represented by a depth-homogeneous tree-like hereditarily finite set x, where supp(x) is the set of neurons of the network and {supp(y) | y ∈ WTC(x)} is the collection of network regions. The network may contain neighbouring network regions supp(z), supp(z′) with {z, z′} ⊂ y ∈ WTC(x) for some y, where for all neurons n ∈ supp(z) and n′ ∈ supp(z′) there exists a synaptic connection between n and n′. And vice versa, the network may contain regions supp(z), supp(z′) which are far from one another, such that for all neurons n ∈ supp(z) and n′ ∈ supp(z′) there does not exist any synaptic connection. The above two cases of synaptic connection and its lack give rise to the following definition.

Definition 8 By an n-level network with neighbourhood graphs we mean an n-level network N completed by new data, namely neighbourhood graphs G_y (y ∈ WTC(x) with depth(y) > 1), with the family of state interaction functions restricted by the neighbourhood graphs such that

• the set V(G_y) of vertices of G_y is y itself, and the set E(G_y) of edges of G_y is such that E(G_y) ⊂ (y × y) − ∆̇_y for ∆̇_y given as in Definition 7,
• the family of state interaction functions μ^y_{z,z′} : supp(z) × supp(z′) → R is determined by the neighbourhood graphs such that (z, z′) ∈ E(G_y) and y ∈ WTC(x) with depth(y) > 1.
The neighbourhood graphs are interpreted such that for (z, z′) ∈ (y × y) − ∆̇_y with (z, z′) ∉ E(G_y), certainly for all neurons n ∈ supp(z) and n′ ∈ supp(z′) there does not exist any synaptic connection from n into n′.

For an n-level network with neighbourhood graphs, its neighbourhood graphs can be illustrated by a Venn diagram of frontier-disjoint balls, where the balls represent the elements of the weak transitive closure of the underlying hereditarily finite set of the network and the edges of the neighbourhood graphs are represented by arcs connecting the frontiers of the corresponding balls; e.g., Figure 10 in the paper [5] illustrates the neighbourhood graph of a corresponding 2-level network, see also Figure 1 in [37] illustrating the neighbourhood graphs of a 3-level network.

Corollary 3 Let N be an n-level network with neighbourhood graphs G_y (y ∈ WTC(x) with depth(y) > 1). Then N determines an inductive construction of a family of graphs Γ_y (y ∈ WTC(x)), with Γ_x as the final result, such that the set V(Γ_y) of vertices of Γ_y and the set E(Γ_y) of edges of Γ_y are given by

• V(Γ_y) = supp(y) for all y ∈ WTC(x),
• for y ∈ WTC(x) with depth(y) = 1,
E(Γ_y) = {(a, b) ∈ V(Γ_y) × V(Γ_y) | μ^{y′}_{y,y}(a, b) > 0} if (y, y) ∈ E(G_{y′}), and otherwise E(Γ_y) = ∅, where y′ is that unique element of WTC(x) for which y ∈ y′,
• for y ∈ WTC(x) with depth(y) > 1,
E(Γ_y) = ⋃{{(a, b) ∈ V(Γ_y) × V(Γ_y) | μ^y_{z,z′}(a, b) > 0} | (z, z′) ∈ E(G_y) and z ≠ z′} ∪ ⋃{E(Γ_z) | z ∈ y}.

Proof The corollary is an immediate consequence of the definition of an n-level network with neighbourhood graphs. □

Remark 3 The construction in Corollary 3 is a generalization of the constructions of an n-dimensional hypercube from copies of hypercubes of dimensions less than n, for n > 3. For some mathematical treatment of n-dimensional hypercubes see e.g. [12] and for their applications see e.g. [33].
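A simplified 2-level instance of the above can be sketched in code. We assume (our representation, not the paper's) that each local state interaction function is a dict {(n, n′): weight} on its block supp(z) × supp(z′); since the blocks are pairwise disjoint, merging them yields the global function of Theorem 3.1, and the edges of the resulting global graph connect exactly the neuron pairs with positive interaction along an edge of the neighbourhood graph:

```python
def glue(local_functions):
    """Theorem 3.1 sketch: merge local functions on pairwise disjoint
    neuron-pair blocks into one global function mu."""
    mu = {}
    for block in local_functions:
        assert not (mu.keys() & block.keys()), "blocks must be pairwise disjoint"
        mu.update(block)
    return mu

def build_edges(neigh_edges, local_mu):
    """Corollary 3 sketch (2-level case): neuron pairs (a, b) with a positive
    interaction whose cluster pair is an edge of the neighbourhood graph."""
    return {(a, b)
            for zz in neigh_edges
            for (a, b), w in local_mu.get(zz, {}).items()
            if w > 0}

# Clusters z1 = {a, b}, z2 = {c}; the neighbourhood graph allows (z1, z1), (z1, z2).
local_mu = {("z1", "z1"): {("a", "b"): 0.7, ("b", "a"): 0.0},
            ("z1", "z2"): {("a", "c"): 0.5, ("b", "c"): 0.0},
            ("z2", "z1"): {("c", "a"): 0.9}}
mu = glue(local_mu.values())
edges = build_edges({("z1", "z1"), ("z1", "z2")}, local_mu)
```

The pair ("c", "a") carries a positive weight in mu, yet contributes no edge, because (z2, z1) is not in the neighbourhood graph.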
Some of the constructions of higher-dimensional hypercubes from copies of hypercubes of low dimensions were pointed out by Richard Feynman and Tamiko Thiel, who applied these constructions, among others, for the visual presentation of the 'constructive' structure of higher-dimensional hypercubes in [38]. The drawings of the structures of the 6-D hypercube, 9-D hypercube, and 12-D hypercube in Chapter II of [38] can be treated intuitively as illustrations of those 2-level, 3-level, and 4-level networks with neighbourhood graphs, respectively, which determine the constructions (as in Corollary 3) of the 6-D hypercube, 9-D hypercube, and 12-D hypercube from copies of the 3-D cube. The neighbourhood graphs of those 2-level, 3-level, and 4-level networks are isomorphic to the 3-D cube.

Remark 4 This remark is a hint for those who will approach modeling cortex behaviour by application of spiking neural P systems. The large number (about 10¹⁰) of neurons in human cortex and the various structural cortex organizations (anatomical, physiological, and functional) suggest approaching the modeling of cortex behaviour by applying (higher-level) networks of spiking neural P systems in a similar way as networks of networks are applied, cf. [6]. It is worth investigating those spiking neural P systems whose sets of neurons and synapses form synfire braids and chains, cf. [7], [14]. The discussion illustrated by Figure 16 in Section 5 of [1] suggests that the concept of an n-level network of spiking neural P systems, with their sets of neurons and synapses forming synfire chains, may prove useful to model the compositionality ability of the brain by binding synfire chains.
Namely, one may consider those higher-level constructs (similar to the construct illustrated by Figure 16 in Section 5 of [1]) which are described in terms of spiking neural P systems whose neurons and synapses form an n-level network with neighbourhood graphs, where the levels of its hierarchical organization (with respect to the nesting relation in Corollary 1) may correspond to the abstraction levels of image perceiving and processing: from the pixel level, through the local feature level, the structure (edge) level, and the object level, to the object set level and the scene characterization level. This gives rise to an open problem, which is a variant of the Temporal Correlation Hypothesis of Visual Feature Integration formulated e.g. in [18]: whether the hierarchical organization of those higher-level constructs is determined by spiking trains reached in those learning processes, e.g. in [19], which respect hierarchical organization as in [13].

Conclusion. The common ground language for linking various aspects of cortex models proposed in the paper concerns mainly the models respecting the columnar and areal organization of cortex, but neuroscience and related fields contain many other models different from those based on columnar and areal cortex organization. Looking forward, the proposed common ground language could be a node of a future net (or web) of common ground languages aimed to provide a discourse between methods, treatments, and approaches, all respecting various motivations, to the formation of cortex models, from:

• cortex models due to E. Bienenstock in [7] based on synfire chains and braids,
• the models in [32] extending the hierarchical organization of cortex pointed out by D. H. Hubel and T. N.
Wiesel, where the level of simple cells (neurons), the level of complex cells, and the level of hypercomplex cells are specified, such that a complex cell responds to some assembly of simple cells and a hypercomplex cell responds to some assembly of complex cells,
• neural net models using tensor product in [30], [36], motivated by problems of linguistics,
• the models of brain memory inspired by the idea of spin glass models in physics, see [31] and the Introduction to [37],

to the models inspiring the construction of distributed systems realizing massively parallel computations, cf. [6], [26], and the neocognitron due to K. Fukushima [16].

That future net could be a platform for mutual inspiration between neuroscience and other disciplines and fields of research and engineering, e.g., city planning, where general aspects of harmony-seeking computations and evolutive processes of spatial adaptation are discussed, cf. [2].

Maybe one could form that future net of common ground languages in a similar way to the networks of patterns forming pattern languages, due to Ch. Alexander, discussed in [4]. It is worth pointing out here that the idea of pattern languages has already inspired computer scientists to similar constructions in the area of object-oriented programming, cf. [9].

Publication date: 2008